\[ \definecolor{firebrick}{RGB}{178,34,34} \newcommand{\red}[1]{{\color{firebrick}{#1}}} \] \[ \definecolor{green}{RGB}{107,142,35} \newcommand{\green}[1]{{\color{green}{#1}}} \] \[ \definecolor{blue}{RGB}{0,0,205} \newcommand{\blue}[1]{{\color{blue}{#1}}} \] \[ \newcommand{\den}[1]{[\![#1]\!]} \] \[ \newcommand{\set}[1]{\{#1\}} \] \[ \newcommand{\tuple}[1]{\langle#1\rangle} \]
\[\newcommand{\States}{{T}}\] \[\newcommand{\state}{{t}}\] \[\newcommand{\Messgs}{{M}}\] \[\newcommand{\messg}{{m}}\]
language processing
levels of analysis
rational speech act model
experimental data
model fits
reflection
processing
incrementality
build syntactic & semantic representations as the sentence comes in

predictive
minimal sense: processing behavior is a function of current state
strong(est) sense: comprehender entertains hypotheses about the future
(Kuperberg & Jaeger 2016)
computational
algorithmic
implementational
(Marr 1982)
(Franke & Jäger 2016)
what?
when?
how?
whence?
pick account
derive some (eccentric) prediction
design experiment
refute & repeat
(Platt 1964)
description <———————————————–> reason
diagnostics
how would it work on Tralfamadore?
could you conceive of it without seeing any data?
data-generating models
statistical model comparison
(e.g., Jurafsky 1996, Hale 2006, Levy 2008)
literal listener picks literal interpretation (uniformly at random):
\[ P_{LL}(t \mid m) \propto P(t \mid [\![m]\!]) \]
Gricean speaker approximates informativity-maximization:
\[ P_{S}(m \mid t) \propto \exp( \lambda P_{LL}(t \mid m)) \]
pragmatic listener uses Bayes' rule to infer likely world states:
\[ P_L(t \mid m ) \propto P(t) \cdot P_S(m \mid t) \]
interpretation is holistic: based on the full & complete utterance
(e.g., Benz 2006, Frank & Goodman 2012)
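For concreteness, here is a minimal sketch of these three layers in Python, for a toy scalar setting; the states, messages, semantics and the value of \(\lambda\) are illustrative assumptions, not the materials of any study discussed below.

```python
import numpy as np

states = ["none", "some-not-all", "all"]      # world states T (assumed toy set)
messages = ["none", "some", "all"]            # messages M (assumed toy set)

# literal semantics [[m]](t): 1 if m is literally true of t, else 0
semantics = np.array([
    [1, 0, 0],   # "none"
    [0, 1, 1],   # "some" (literally: at least one)
    [0, 0, 1],   # "all"
], dtype=float)

prior = np.ones(len(states)) / len(states)    # flat prior P(t)
lam = 5.0                                     # speaker rationality (assumed value)

def normalize(x):
    return x / x.sum(axis=-1, keepdims=True)

# literal listener: P_LL(t | m) ∝ P(t | [[m]])        (rows: messages)
P_LL = normalize(semantics * prior)

# Gricean speaker: P_S(m | t) ∝ exp(lam * P_LL(t | m))  (rows: states)
P_S = normalize(np.exp(lam * P_LL.T))

# pragmatic listener: P_L(t | m) ∝ P(t) * P_S(m | t)   (rows: messages)
P_L = normalize(prior * P_S.T)

# after hearing "some", most probability mass goes to "some-not-all"
print(dict(zip(states, P_L[messages.index("some")].round(3))))
```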
messages are word sequences: \(\messg = w_1, \dots, w_n\)
initial subsequence of \(\messg\): \(\messg_{\rightarrow i} = w_1, \dots, w_i\)
all messages sharing initial subsequence: \(\Messgs(\messg_{\rightarrow i}) = \set{\messg' \in \Messgs \mid \messg'_{\rightarrow i} = \messg_{\rightarrow i}}\)
next-word expectation:
\[P_L(w_{i+1} \mid \messg_{\rightarrow i}) \propto \sum_{\state} P(\state) \ \sum_{\messg' \in \Messgs(\messg_{\rightarrow i}, w_{i+1})} P_S(\messg' \mid \state)\]
\[P_L(\state \mid \messg_{\rightarrow i}) \propto P(\state) \ \sum_{\messg' \in \Messgs(\messg_{\rightarrow i})} P_S(\messg' \mid \state)\]
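A sketch of how these incremental quantities can be computed, assuming a speaker distribution \(P_S\) over complete utterances (word tuples) is already available, e.g. from a model like the one sketched above; the toy utterances, states and probabilities are invented for illustration only.

```python
from collections import defaultdict

# speaker probabilities over complete utterances (tuples of words), per state;
# these particular utterances, states, and numbers are made up for illustration
P_S = {
    ("some-not-all", ("some", "dots", "are", "red")): 0.8,
    ("some-not-all", ("no",   "dots", "are", "red")): 0.2,
    ("all",          ("some", "dots", "are", "red")): 0.1,
    ("all",          ("all",  "dots", "are", "red")): 0.9,
}
prior = {"some-not-all": 0.5, "all": 0.5}

def next_word_expectation(prefix):
    """P_L(w_{i+1} | m_prefix): sum over states and over utterances that extend the prefix with w_{i+1}."""
    scores = defaultdict(float)
    for (state, utt), p in P_S.items():
        if utt[:len(prefix)] == tuple(prefix) and len(utt) > len(prefix):
            scores[utt[len(prefix)]] += prior[state] * p
    total = sum(scores.values())
    return {w: s / total for w, s in scores.items()}

def state_expectation(prefix):
    """P_L(t | m_prefix): prior times summed speaker probability of all utterances sharing the prefix."""
    scores = defaultdict(float)
    for (state, utt), p in P_S.items():
        if utt[:len(prefix)] == tuple(prefix):
            scores[state] += prior[state] * p
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

print(next_word_expectation([]))        # expectations about the first word
print(state_expectation(["some"]))      # state beliefs after hearing "some"
```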
next-word
self-paced reading
eye-tracked reading
ERPs
…?
interpretation
visual worlds
mouse-tracking
…?
Noveck & Posada (2003)
Nieuwland et al. (2010)
Politzer-Ahles et al. (2013)
Hunt et al. (2013)
Spychalska et al. (2016)
participants & procedure
sentence material
visual stimuli
general assumptions
experimental microcosmos assumption
specific assumptions
percentage of pragmatic responses per participant
pragmatic responders
expect pragmatic speakers
\[P(\text{more informative true}) > P(\text{less informative true}) > P(\text{false})\]
semantic responders
expect literal speakers
\[P(\text{more informative true}) = P(\text{less informative true}) > P(\text{false})\]
(Franke 2012, Franke & Degen 2016)
main issue
how to fix reasonable \(\States\) and \(\Messgs\)?
experimental microcosmos assumption
all (and only?) meanings and forms that occur in the experiment
prediction
massive influence of filler material
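This prediction can be made concrete with the same toy machinery as above: what the pragmatic listener infers from a given form depends heavily on which other forms are assumed to be in \(\Messgs\). A minimal illustration (assumed states, semantics and \(\lambda\); not a model of the actual filler items): adding a single "some but not all" alternative all but eliminates the implicature-based inference from "some".

```python
import numpy as np

def pragmatic_listener(semantics, lam=5.0):
    """Same pipeline as above; semantics is a (messages x states) truth-value matrix."""
    n_states = semantics.shape[1]
    prior = np.ones(n_states) / n_states
    P_LL = semantics * prior
    P_LL = P_LL / P_LL.sum(axis=1, keepdims=True)        # literal listener
    P_S = np.exp(lam * P_LL.T)
    P_S = P_S / P_S.sum(axis=1, keepdims=True)           # speaker
    P_L = prior * P_S.T
    return P_L / P_L.sum(axis=1, keepdims=True)          # pragmatic listener

states = ["none", "some-not-all", "all"]

# M = {"none", "some", "all"}
sem_small = np.array([[1, 0, 0],
                      [0, 1, 1],
                      [0, 0, 1]], dtype=float)

# M additionally contains an explicit "some but not all" alternative
sem_large = np.array([[1, 0, 0],
                      [0, 1, 1],
                      [0, 0, 1],
                      [0, 1, 0]], dtype=float)

# interpretation of "some" (row index 1) under each message set
print(dict(zip(states, pragmatic_listener(sem_small)[1].round(3))))  # strong implicature
print(dict(zip(states, pragmatic_listener(sem_large)[1].round(3))))  # implicature mostly gone
```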
participants & procedure
sentence material
visual stimuli
behavioral data
only one participant consistently gave pragmatic judgements
ERP responses
no trace of pragmatic infelicity / expectations